

Generating DDPM-based Samples from Tilted Distributions

Mandal, Himadri, Gupta, Dhruman, Gupta, Rushil, Iyer, Sarvesh Ravichandran, Bandyopadhyay, Agniv, Bassamboo, Achal, Gupta, Varun, Juneja, Sandeep

arXiv.org Machine Learning

Given $n$ independent samples from a $d$-dimensional probability distribution, our aim is to generate diffusion-based samples from a distribution obtained by tilting the original, where the degree of tilt is parametrized by $θ\in \mathbb{R}^d$. We define a plug-in estimator and show that it is minimax-optimal. We develop Wasserstein bounds between the distribution of the plug-in estimator and the true distribution as a function of $n$ and $θ$, illustrating regimes where the output and the desired true distribution are close. Further, under some assumptions, we prove the TV-accuracy of running Diffusion on these tilted samples. Our theoretical results are supported by extensive simulations. Applications of our work include finance, weather and climate modelling, and many other domains, where the aim may be to generate samples from a tilted distribution that satisfies practically motivated moment constraints.
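The plug-in idea above can be sketched with a self-normalized importance-weighting step. This is a minimal illustration, not the paper's estimator: it assumes the tilt is the exponential tilt $p_\theta(x) \propto e^{\theta \cdot x}\,p(x)$, and the function name and resampling scheme are my own.

```python
import numpy as np

def tilted_resample(samples, theta, rng=None):
    """Approximate samples from the exponentially tilted distribution
    p_theta(x) ∝ exp(theta · x) p(x), given i.i.d. samples from p.
    Uses self-normalized importance weights and multinomial resampling."""
    rng = np.random.default_rng() if rng is None else rng
    logw = samples @ theta            # log of the unnormalized weight exp(theta · x_i)
    logw -= logw.max()                # subtract max for numerical stability
    w = np.exp(logw)
    w /= w.sum()                      # self-normalize
    idx = rng.choice(len(samples), size=len(samples), p=w)
    return samples[idx]

# Sanity check: tilting N(0, 1) by theta = 1 gives N(1, 1),
# so the resampled mean should drift toward 1.
x = np.random.default_rng(0).normal(size=(100_000, 1))
y = tilted_resample(x, np.array([1.0]))
```

Resampling is only one way to use the weights; the weighted empirical measure itself can feed a downstream diffusion model directly.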


Forward and inverse problems for measure flows in Bayes Hilbert spaces

Mis, S. David, de Hoop, Maarten V.

arXiv.org Machine Learning

We study forward and inverse problems for time-dependent probability measures in Bayes--Hilbert spaces. On the forward side, we show that each sufficiently regular Bayes--Hilbert path admits a canonical dynamical realization: a weighted Neumann problem transforms the log-density variation into the unique gradient velocity field of minimum kinetic energy. This construction induces a transport form on Bayes--Hilbert tangent directions, which measures the dynamical cost of realizing prescribed motions, and yields a flow-matching interpretation in which the canonical velocity field is the minimum-energy execution of the prescribed path. On the inverse side, we formulate reconstruction directly on Bayes--Hilbert path space from time-dependent indirect observations. The resulting variational problem combines a data-misfit term with the transport action induced by the forward geometry. In our infinite-dimensional setting, however, this transport geometry alone does not provide sufficient compactness, so we add explicit temporal and spatial regularization to close the theory. The linearized observation operator induces a complementary observability form, which quantifies how strongly tangent directions are seen through the data. Under explicit Sobolev regularity and observability assumptions, we prove existence of minimizers, derive first-variation formulas, establish local stability of the observation map, and deduce recovery of the evolving law, its score, and its canonical velocity field under the strong topologies furnished by the compactness theory.
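The "weighted Neumann problem" step can be sketched in standard optimal-transport notation; this is a sketch under the usual Otto-calculus conventions, and the paper's Bayes--Hilbert formulation may differ in detail. Given a sufficiently regular density path $\rho_t$ on a domain $\Omega$, one solves for a potential $\varphi_t$ satisfying

$$
-\nabla\!\cdot\!\big(\rho_t \nabla \varphi_t\big) = \partial_t \rho_t \ \text{in } \Omega,
\qquad
\rho_t\, \partial_n \varphi_t = 0 \ \text{on } \partial\Omega,
$$

so that $v_t = \nabla \varphi_t$ transports $\rho_t$ via the continuity equation $\partial_t \rho_t + \nabla\!\cdot(\rho_t v_t) = 0$. Among all velocity fields realizing the path, the gradient field minimizes the kinetic energy $\int_\Omega \|v_t\|^2 \rho_t\, dx$, which is the minimum-energy (flow-matching) interpretation described above.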





Algorithm 3 (Primal-Dual Method): Initialize the particles $\{\theta_{i,0}\}_{i=1}^n$ and $\lambda_0$

Neural Information Processing Systems

So we can check that $\frac{\mathrm{d}}{\mathrm{d}t} E(q_t, \lambda_t) \le 0$ in both cases. Combining the two cases yields the result. The mixture $\sum_{i=1}^m N(\theta; \mu_i, \sigma_i^2)$ is used, where $m$ is fixed to 5 in all the experiments. Monotonic Bayesian Neural Networks: in this experiment, we use the COMPAS dataset (J. The task is to predict whether the individual will commit a crime again in 2 years.
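The $m$-component Gaussian mixture above can be evaluated stably with a log-sum-exp trick. A minimal sketch, assuming equal component weights (the excerpt does not state the weighting); the function name is illustrative.

```python
import numpy as np

def mixture_logpdf(theta, mu, sigma2):
    """Log-density of an equal-weight 1-D Gaussian mixture
    (1/m) * sum_i N(theta; mu_i, sigma2_i), computed with
    log-sum-exp for numerical stability."""
    theta = np.atleast_1d(theta)[:, None]          # shape (k, 1)
    mu, sigma2 = np.asarray(mu), np.asarray(sigma2)  # shape (m,)
    # Per-component Gaussian log-densities, shape (k, m)
    log_comp = (-0.5 * (theta - mu) ** 2 / sigma2
                - 0.5 * np.log(2 * np.pi * sigma2))
    a = log_comp.max(axis=1, keepdims=True)        # log-sum-exp shift
    lse = a + np.log(np.exp(log_comp - a).sum(axis=1, keepdims=True))
    return lse.ravel() - np.log(len(mu))           # equal weights 1/m

# With m = 5 identical standard-normal components, the mixture
# collapses to a single N(0, 1).
m = 5
logp = mixture_logpdf(0.0, mu=np.zeros(m), sigma2=np.ones(m))
```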



$$
\mathcal{L}_{\mathrm{IPF}}(\theta) := \mathbb{E}\big[\, Y_s^\theta + \widehat{Y}_s^\phi \;\big|\; X_s = x,\ s = 0 \,\big]
= \log \rho_T(\theta; x) + \mathbb{E}\Big[ \int \mathrm{d}Y_t^\theta + \mathrm{d}\widehat{Y}_t^\phi \;\Big|\; X_0 = x \Big] \tag{24a}
$$

Neural Information Processing Systems

SB-FBSDE is a new class of generative models that, inspired by recent advances in understanding deep learning through the optimal control perspective [61-63], adopts Lemma 5 to generalize score-based diffusion models.